Active Regression with Adaptive Huber Loss
Authors
Abstract
This paper addresses the scalar regression problem, presenting a solution for optimizing the Huber loss in a general semi-supervised setting that combines multi-view learning and manifold regularization. To this end, we propose a principled algorithm that 1) avoids computationally expensive iterative solutions, 2) adapts the Huber loss threshold in a data-driven fashion, and 3) actively balances the use of labelled data, removing noisy or inconsistent annotations from the training stage. In a wide experimental evaluation spanning diverse applications, we demonstrate the superiority of our paradigm, which combines strong performance and robustness to noise at a low computational cost.
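The abstract does not spell out how the threshold is adapted; the sketch below illustrates one plausible data-driven rule, in which the Huber threshold is re-estimated from a robust scale of the current residuals. The MAD-based rule and the constant c = 1.345 are standard robust-statistics choices assumed here for illustration, not the paper's actual update.

```python
import numpy as np

def huber_loss(r, delta):
    """Huber loss: quadratic for |r| <= delta, linear beyond,
    so gross outliers are penalized linearly rather than squared."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * a - 0.5 * delta**2)

def adaptive_delta(r, c=1.345):
    """Re-estimate the Huber threshold from the current residuals via
    the median absolute deviation (MAD); dividing by 0.6745 makes the
    MAD a consistent scale estimate under Gaussian noise. This rule is
    an illustrative assumption, not the paper's update."""
    mad = np.median(np.abs(r - np.median(r)))
    return c * mad / 0.6745

# Usage: alternate between fitting and re-estimating the threshold.
rng = np.random.default_rng(0)
residuals = rng.normal(size=200)
residuals[:5] += 10.0                  # inject a few gross outliers
delta = adaptive_delta(residuals)
print(delta, huber_loss(residuals, delta).mean())
```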
Similar Resources
A Huber Recursive Least Squares Adaptive Lattice Filter for Impulse Noise Suppression
This paper proposes a new adaptive filtering algorithm, called the Huber Prior Error-Feedback Least Squares Lattice (H-PEF-LSL) algorithm, for robust adaptive filtering in impulse noise environments. It minimizes a modified Huber M-estimator-based cost function instead of the least-squares cost function. In addition, the simple modified Huber M-estimate cost function also allows us to perform the...
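The full H-PEF-LSL recursions are not reproduced in this summary, but the modified Huber cost and the error weighting it induces can be sketched as follows; the threshold xi is a placeholder, not the value used in the paper.

```python
import numpy as np

def modified_huber_cost(e, xi):
    """Huber M-estimator cost: least squares for small errors,
    linear growth beyond xi, which bounds the cost of an impulse."""
    a = np.abs(e)
    return np.where(a <= xi, 0.5 * e**2, xi * a - 0.5 * xi**2)

def huber_weight(e, xi):
    """Equivalent reweighted-least-squares weight per error sample:
    1 inside the threshold, xi/|e| outside, so an impulsive error
    contributes a capped rather than squared penalty to the update."""
    a = np.maximum(np.abs(e), 1e-12)   # avoid division by zero
    return np.where(a <= xi, 1.0, xi / a)
```

Plugging such weights into a recursive least-squares update is the standard route to impulse-robust adaptive filters.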
Adaptive Regularization of Some Inverse Problems in Image Analysis
We present an adaptive regularization scheme for optimizing composite energy functionals arising in image analysis problems. The scheme automatically trades off data fidelity and regularization depending on the current data fit during the iterative optimization, so that regularization is strongest initially, and wanes as data fidelity improves, with the weight of the regularizer being minimized...
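A minimal 1-D sketch of this trade-off, assuming a quadratic smoothness regularizer and a residual-driven weight schedule (both are illustrative assumptions; the paper derives its own rule):

```python
import numpy as np

def denoise(f, lam0=1.0, tau=0.05, steps=500):
    """Gradient descent on 0.5*||u - f||^2 + lam/2 * ||grad u||^2,
    with lam re-set each step from the current data fit so smoothing
    dominates early and data fidelity dominates late. The schedule
    lam = lam0 * mean squared residual is an assumption."""
    u = np.zeros_like(f)                    # start far from the data
    for _ in range(steps):
        data_grad = u - f                   # gradient of the fidelity term
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u   # 1-D Laplacian
        lam = lam0 * np.mean(data_grad**2)  # wanes as the fit improves
        u = u - tau * (data_grad - lam * lap)
    return u

# Usage: denoise a noisy ramp signal.
x = np.linspace(0, 1, 128)
noisy = x + 0.1 * np.random.default_rng(1).normal(size=128)
clean = denoise(noisy)
```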
Recursive Finite Newton Algorithm for Support Vector Regression in the Primal
Some algorithms in the primal have recently been proposed for training support vector machines. This letter follows those studies and develops a recursive finite Newton algorithm (IHLF-SVR-RFN) for training nonlinear support vector regression. The insensitive Huber loss function and the computation of the Newton step are discussed in detail. Comparisons with LIBSVM 2.82 show that the proposed a...
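The letter's exact parameterization is not given in this summary; one common form of the insensitive Huber loss, assumed here for illustration, combines an eps-insensitive tube with Huber's quadratic-to-linear transition:

```python
import numpy as np

def insensitive_huber(r, eps=0.1, delta=1.0):
    """Zero inside the eps-tube, quadratic over the next band of width
    delta, linear beyond: continuously differentiable, which is what
    makes (generalized) Newton steps in the primal well defined.
    eps and delta are illustrative values, not the letter's."""
    a = np.maximum(np.abs(r) - eps, 0.0)   # distance outside the tube
    return np.where(a <= delta, 0.5 * a**2, delta * a - 0.5 * delta**2)
```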
Adaptive Accelerated Gradient Converging Methods under Hölderian Error Bound Condition
Recent studies have shown that the proximal gradient (PG) method and the accelerated gradient (APG) method with restarting can enjoy linear convergence under a condition weaker than strong convexity, namely a quadratic growth condition (QGC). However, the faster convergence of the restarted APG method relies on the potentially unknown constant in the QGC to restart APG appropriately, which restricts its app...
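For context, a standard adaptive-restart baseline can be sketched as follows; the gradient-based restart test is O'Donoghue and Candes's heuristic, used here as an assumption, whereas the paper's scheme instead adapts to the unknown QGC constant.

```python
import numpy as np

def apg_restart(grad, x0, step, iters=500):
    """Accelerated gradient descent with adaptive restart: reset the
    momentum whenever it points against the negative gradient
    (gradient restart test), with no knowledge of growth constants."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        g = grad(y)
        x_new = y - step * g
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        mom = x_new - x
        if g @ mom > 0:                    # momentum opposes descent
            t_new, y = 1.0, x_new          # restart the momentum
        else:
            y = x_new + ((t - 1.0) / t_new) * mom
        x, t = x_new, t_new
    return x

# Usage on a simple least-squares problem: min 0.5*||A x - b||^2.
A = np.array([[3.0, 0.0], [0.0, 0.5]])
b = np.array([1.0, -2.0])
sol = apg_restart(lambda z: A.T @ (A @ z - b), np.zeros(2), step=0.1)
```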
Journal: CoRR
Volume: abs/1606.01568
Issue: -
Pages: -
Publication date: 2016